olmOCR-7B-0725

7.0B params · 1 language · license: apache-2.0 · by allenai
Image-text-to-text model · 3K downloads
Quick Summary

This is a release of the olmOCR model, fine-tuned from Qwen2.5-VL-7B-Instruct.

Device Compatibility

Mobile:  4-6GB RAM
Laptop:  16GB RAM
Server:  GPU
Minimum recommended: 7GB+ RAM
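
For the laptop tier, the float16 checkpoint (roughly 14GB of weights alone) is a tight fit, so a quantized load is often the practical route. Below is a hedged sketch using 4-bit quantization via bitsandbytes; the package requirement and the memory estimate are assumptions about a typical setup, not an official recipe from the model authors.

import torch
from transformers import AutoModelForImageTextToText, AutoProcessor, BitsAndBytesConfig

model_id = "allenai/olmOCR-7B-0725"

# 4-bit NF4 quantization typically brings a 7B model to roughly 5-6GB.
# Requires the bitsandbytes package and a CUDA GPU.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # spread layers across available GPU/CPU memory
).eval()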

Code Examples

Python (transformers)

import torch
from transformers import AutoModelForImageTextToText, AutoProcessor

model_id = "allenai/olmOCR-7B-0725"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(
    model_id, torch_dtype=torch.float16
).to("cuda").eval()

# {base_text} is a placeholder for anchor text previously extracted from the
# page (see the PDF-rendering sketch after this example); substitute it
# before sending the prompt.
PROMPT = """
Below is the image of one page of a PDF document, as well as some raw textual content that
was previously extracted for it that includes position information for each image and
block of text (the origin [0x0] of the coordinates is in the lower left corner of the
image).
Just return the plain text representation of this document as if you were reading it
naturally.
Turn equations into a LaTeX representation, and tables into markdown format. Remove the
headers and footers, but keep references and footnotes.
Read any natural handwriting.
This is likely one page out of several in the document, so be sure to preserve any sentences
that come from the previous page, or continue onto the next page, exactly as they are.
If there is no text at all that you think you should read, you can output null.
Do not hallucinate.
RAW_TEXT_START
{base_text}
RAW_TEXT_END
"""

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/smolvlm_table.png",
            },
            {"type": "text", "text": PROMPT},
        ],
    }
]

# Render the chat template and tokenize image + prompt in one step.
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt"
).to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=1000)
# Strip the prompt tokens so only the newly generated text is decoded.
generated_ids = [out[len(inp):] for inp, out in zip(inputs.input_ids, output_ids)]
output_text = processor.batch_decode(
    generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True
)
print(output_text)
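
The example above points at a hosted sample image; in practice olmOCR is run over rendered PDF pages. Below is a minimal sketch of that step, continuing from the example above (it reuses PROMPT). It assumes the pdf2image package (a poppler wrapper) is installed, that doc.pdf and page.png are placeholder file names, and that your transformers release accepts a local path in the "image" field.

from pdf2image import convert_from_path  # pip install pdf2image (requires poppler)

# Render page 1 of the PDF to a PNG. Around 150-200 DPI is usually enough for
# OCR; higher DPI costs more tokens through the vision encoder.
pages = convert_from_path("doc.pdf", dpi=150, first_page=1, last_page=1)
pages[0].save("page.png")

# Fill the {base_text} placeholder. The official olmocr toolkit derives this
# anchor text from the PDF's own text layer; an empty string is a valid
# degenerate case.
prompt = PROMPT.format(base_text="")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "page.png"},
            {"type": "text", "text": prompt},
        ],
    }
]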

Deploy This Model

Hosted, production-ready deployment options include Together.ai, which offers instant API access with a free tier that scales to production traffic, and Replicate, which runs the model in the cloud behind a simple API with one-click deployment and no DevOps work.

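
If you host the model yourself instead (for example behind a vLLM or SGLang server), it can be called through any OpenAI-compatible client. The sketch below is illustrative only: the base_url, API key, and prompt are placeholders for your own deployment, and the model name must match whatever your server registers.

from openai import OpenAI

# Placeholder endpoint and credentials; point these at your own server or a
# hosted provider that serves allenai/olmOCR-7B-0725.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="allenai/olmOCR-7B-0725",
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "image_url",
                    "image_url": {"url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/smolvlm_table.png"},
                },
                {"type": "text", "text": "Return the plain text of this page."},
            ],
        }
    ],
    max_tokens=1000,
)
print(response.choices[0].message.content)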