# Armaggheddon/yolo11-document-layout
This repository hosts three YOLOv11 models (nano, small, and medium) fine-tuned for high-performance Document Layout Analysis on the challenging DocLayNet dataset. The goal is to accurately detect and classify key layout elements in a document, such as text, tables, figures, and titles. This is a fundamental task for document understanding and information extraction pipelines.

## ✨ Model Highlights

- 🚀 **Three Powerful Variants:** Choose between `nano`, `small`, and `medium` models to fit your performance needs.
- 🎯 **High Accuracy:** Trained on the comprehensive DocLayNet dataset to recognize 11 distinct layout types.
- ⚡ **Optimized for Efficiency:** The recommended `yolo11n` (nano) model offers an exceptional balance of speed and accuracy, making it ideal for production environments.

This Python snippet shows how to download a model from the Hub and run inference on a local document image.

We fine-tuned three YOLOv11 variants, allowing you to choose the best model for your use case:

- `yolo11ndoclayout.pt` (train4): **Recommended.** The nano model offers the best trade-off between speed and accuracy.
- `yolo11sdoclayout.pt` (train5): A larger, slightly more accurate model.
- `yolo11mdoclayout.pt` (train6): The largest model, providing the highest accuracy with a corresponding increase in computational cost.

As shown in the analysis below, performance gains are marginal when moving from the `small` to the `medium` model, making the `nano` and `small` variants the most practical choices.

Here's how the three models stack up across key metrics. The plots compare their performance for each document layout label.
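The download-and-infer flow described above might look like the sketch below. The Hub repo id `Armaggheddon/yolo11-document-layout` is an assumption for illustration (use the actual repository hosting the weights), and the weight filenames are taken from the list above; the calls follow the standard `huggingface_hub` and `ultralytics` APIs.

```python
REPO_ID = "Armaggheddon/yolo11-document-layout"  # assumed repo id, adjust as needed

# Weight files listed above, keyed by variant.
WEIGHTS = {
    "nano": "yolo11ndoclayout.pt",    # train4, recommended
    "small": "yolo11sdoclayout.pt",   # train5
    "medium": "yolo11mdoclayout.pt",  # train6
}

def detect_layout(image_path: str, variant: str = "nano"):
    """Download the chosen checkpoint from the Hub and run layout detection."""
    # Lazy imports keep the heavy dependencies optional until inference time.
    from huggingface_hub import hf_hub_download
    from ultralytics import YOLO

    weights_path = hf_hub_download(repo_id=REPO_ID, filename=WEIGHTS[variant])
    model = YOLO(weights_path)

    # Inference at the 1280x1280 training resolution preserves accuracy
    # on small elements such as footnotes and captions.
    results = model.predict(image_path, imgsz=1280)
    for box in results[0].boxes:
        label = results[0].names[int(box.cls)]
        print(f"{label}: {float(box.conf):.2f} at {box.xyxy[0].tolist()}")
    return results

# Example: detect_layout("page_001.png", variant="nano")
```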
| mAP@50-95 (Strict IoU) | mAP@50 (Standard IoU) |
| :---: | :---: |
| *(plot)* | *(plot)* |

| Precision (Box Quality) | Recall (Detection Coverage) |
| :---: | :---: |
| *(plot)* | *(plot)* |

<details>
<summary>Click to see detailed Training Metrics & Confusion Matrices</summary>

| Model | Training Metrics | Normalized Confusion Matrix |
| :---: | :---: | :---: |
| `yolo11n` (train4) | *(plot)* | *(plot)* |
| `yolo11s` (train5) | *(plot)* | *(plot)* |
| `yolo11m` (train6) | *(plot)* | *(plot)* |

</details>

## 🏆 The Champion: Why `train4` (Nano) is the Best Choice

While all nano-family models performed well, a deeper analysis revealed that `train4` stands out for its superior localization quality. We compared it against `train9` (another strong nano contender), which achieved a slightly higher recall by sacrificing bounding box precision. For applications where data integrity and accurate object boundaries are critical, `train4` is the clear winner.

Key Advantages of `train4`:

1. **Superior Box Precision:** It delivered significantly more accurate bounding boxes, with a +9.0% precision improvement for the `title` class and strong gains for `section-header` and `table`.
2. **Higher Quality Detections:** It achieved a +2.4% mAP@50 and +2.05% mAP@50-95 improvement for the difficult `footnote` class, proving its ability to meet stricter IoU thresholds.

| Box Precision Improvement | mAP@50 Improvement | mAP@50-95 Improvement |
| :---: | :---: | :---: |
| *(plot)* | *(plot)* | *(plot)* |

In short, `train4` prioritizes quality over quantity, making it the most reliable choice for production systems.

The models were trained on the DocLayNet dataset, which provides a rich and diverse collection of document images annotated with 11 layout categories:

- Text, Title, Section-header
- Table, Picture, Caption
- List-item, Formula
- Page-header, Page-footer, Footnote

**Training Resolution:** All models were trained at 1280x1280 resolution. Initial tests at the default 640x640 resulted in a significant performance drop, especially for smaller elements like `footnote` and `caption`.

This model card focuses on results and usage.
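The training setup described above (the 11 DocLayNet classes, 1280x1280 input) could be reproduced along these lines with the `ultralytics` trainer. Only the class list and image size come from this card; the dataset config name `doclaynet.yaml` and the epoch count are placeholders, so this is a sketch rather than the exact script used (see the GitHub repository for that).

```python
# The 11 DocLayNet layout categories listed above.
DOCLAYNET_CLASSES = [
    "Text", "Title", "Section-header",
    "Table", "Picture", "Caption",
    "List-item", "Formula",
    "Page-header", "Page-footer", "Footnote",
]

def train_doclayout(variant: str = "n", epochs: int = 100):
    """Fine-tune a YOLOv11 variant on DocLayNet at 1280x1280.

    `doclaynet.yaml` is a placeholder for a dataset config mapping the
    11 classes above to the converted DocLayNet images and labels.
    """
    from ultralytics import YOLO  # lazy import: heavy dependency

    model = YOLO(f"yolo11{variant}.pt")  # start from the pretrained checkpoint
    model.train(
        data="doclaynet.yaml",  # assumed dataset config
        imgsz=1280,             # 640x640 hurt small classes like footnote/caption
        epochs=epochs,
    )
    return model
```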
For the complete end-to-end pipeline, including training scripts, dataset conversion utilities, and detailed examples, please visit the main GitHub repository:
# clip-vit-base-patch32_lego-minifigure
Model Card for clip-vit-base-patch32_lego-minifigure

This model is a fine-tuned version of the `openai/clip-vit-base-patch32` CLIP (Contrastive Language-Image Pretraining) model on the `lego_minifigure_captions` dataset, specialized for matching images of Lego minifigures with their corresponding textual descriptions.

> [!NOTE]
> If you are interested in the code used, refer to the fine-tuning script on my GitHub.

## 🤖 Minifigure Finder: Your LEGO Buddy Locator!

Got a minifigure in mind but can't recall its name or where it came from? Maybe you've got a picture of your favorite little guy, but no clue how to describe it? Say no more, BricksFinder has you covered!

Just type in something like "red shirt, pirate hat" or upload a photo of the minifigure, and voilà! You'll get a list of matches with images of minifigs that fit your description. It's like a LEGO buddy GPS, but way cooler. Whether you're collecting, sorting, or just geeking out over LEGO, this tool's here to help you connect with the minifigs you love. Try the live demo on Colab and see it in action!

- **Developed by:** The base model was developed by OpenAI; the fine-tuned model was developed by me, Armaggheddon.
- **Model type:** CLIP (Contrastive Language-Image Pretraining).
- **Language:** The model expects English text as input.
- **License:** MIT.
- **Finetuned from model:** `openai/clip-vit-base-patch32`, fine-tuned on the `lego_minifigure_captions` dataset.

The model has been fine-tuned for 7 epochs on an 80-20 train-validation split of the dataset. For more details on the fine-tuning script, take a look at the code on my GitHub.

## Usage with 🤗 transformers

Load the model and processor using the following code snippet. The provided model is in float32 precision.
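The loading step might look like the following sketch. The repo id is an assumption derived from the model name on this card; the calls use the standard `transformers` CLIP API, and the optional `half_precision` flag shows the float16 variant mentioned below.

```python
# Assumed Hub repo id, derived from the model name on this card.
MODEL_ID = "Armaggheddon/clip-vit-base-patch32_lego-minifigure"

def load_clip(half_precision: bool = False):
    """Load the fine-tuned CLIP model and its processor.

    The published weights are float32; pass half_precision=True to load
    them as float16 and speed up inference (GPU recommended).
    """
    import torch  # lazy imports: heavy dependencies
    from transformers import CLIPModel, CLIPProcessor

    dtype = torch.float16 if half_precision else torch.float32
    model = CLIPModel.from_pretrained(MODEL_ID, torch_dtype=dtype)
    processor = CLIPProcessor.from_pretrained(MODEL_ID)
    return model, processor

# Example: model, processor = load_clip(half_precision=True)
```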
To load the model in float16 precision to speed up inference, you can use the following code snippet.

## Results

The goal was to obtain a model that could more accurately distinguish minifigure images based on their textual descriptions. In terms of raw accuracy, both models perform similarly. However, when tested on a classification task, with the code in the Zero-shot image classification section, the fine-tuned model classifies the images with a much greater level of confidence.

For example, when testing the model with the following inputs:

- `a photo of a lego minifigure with a t-shirt with a pen holder`
- `a photo of a lego minifigure with green pants`
- `a photo of a lego minifigure with a red cap`

the fine-tuned model outputs:

- 99.76%: "a photo of a lego minifigure with a t-shirt with a pen holder"
- 0.10%: "a photo of a lego minifigure with green pants"
- 0.13%: "a photo of a lego minifigure with a red cap"

while the base model, for the same inputs, gives:

- 44.14%: "a photo of a lego minifigure with a t-shirt with a pen holder"
- 24.36%: "a photo of a lego minifigure with green pants"
- 31.50%: "a photo of a lego minifigure with a red cap"

This shows that the fine-tuned model matches images to their textual descriptions with far greater confidence. Running the same task across the whole dataset, with 1 correct caption (always the first) and 2 randomly sampled ones, results in the following metrics.

The plot visualizes the normalized text logits produced by the fine-tuned and base models:

- **Input:** For each sample, an image of a Lego minifigure was taken, along with three captions:
  - The correct caption that matches the image (in position 0).
  - Two randomly sampled, incorrect captions (in positions 1 and 2).
- **Output:** The model generated text logits for each caption, reflecting the similarity between the image embedding and each caption embedding. These logits were then normalized for easier visualization.
- **Heatmap Visualization:** The normalized logits are displayed as a heatmap where:
  - Each row represents a different input sample.
  - Each column represents one of the three captions for a given sample image: the correct one (position 0, first column) and the two random ones (positions 1 and 2, second and third columns).
  - The color intensity represents the normalized logit score the model assigned to each caption, with darker colors indicating higher scores and thus higher confidence (i.e., the greater the contrast between the first column and the other two, the better the result).

The base model (right), as expected, did not show high confidence in any of the classes, indicating poor discrimination between the image and text samples; this is also highlighted by the much smaller variation between the scores for the labels. However, in terms of accuracy, it still assigns the correct caption in 99.98% of the samples.

The fine-tuned model (left) shows much higher confidence in the correct caption, with a clear distinction between the correct and incorrect captions. This is reflected in the higher scores assigned to the correct caption and the lower ones assigned to the incorrect ones. In terms of accuracy, the fine-tuned model performs similarly, though slightly below the base model, at 99.39%.
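The zero-shot comparison described above can be reproduced roughly as follows. The caption set is the example from this card; the per-sample min-max normalization is an assumption about how the heatmap values were produced, and `zero_shot_scores` expects a CLIP model/processor pair such as the one this card distributes.

```python
CAPTIONS = [
    "a photo of a lego minifigure with a t-shirt with a pen holder",
    "a photo of a lego minifigure with green pants",
    "a photo of a lego minifigure with a red cap",
]

def normalize(row):
    """Min-max normalize one sample's logits for the heatmap (assumed scheme)."""
    lo, hi = min(row), max(row)
    return [(x - lo) / (hi - lo) if hi > lo else 0.0 for x in row]

def zero_shot_scores(model, processor, image):
    """Score each caption against one PIL image with a CLIP model/processor."""
    import torch  # lazy import: heavy dependency

    inputs = processor(text=CAPTIONS, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = model(**inputs)
    # logits_per_image has shape (1, num_captions); softmax turns the
    # similarity scores into the percentage-style outputs quoted above.
    return outputs.logits_per_image.softmax(dim=1)[0].tolist()

# Example:
#   model, processor = ...  # load the fine-tuned or base CLIP checkpoint
#   probs = zero_shot_scores(model, processor, Image.open("minifig.png"))
#   heat_row = normalize(probs)  # one row of the heatmap
```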